Dynamically constraining connectionist networks to produce distributed, orthogonal representations to reduce catastrophic interference
Author
Abstract
It is well known that when a connectionist network is trained on one set of patterns and then attempts to add new patterns to its repertoire, catastrophic interference may result. The use of sparse, orthogonal hidden-layer representations has been shown to reduce catastrophic interference. The author demonstrates that the use of sparse representations may, in certain cases, actually result in worse performance on catastrophic interference. This paper argues for the necessity of maintaining hidden-layer representations that are both as highly distributed and as highly orthogonal as possible. The author presents a learning algorithm, called context-biasing, that dynamically solves the problem of constraining hidden-layer representations to simultaneously produce good orthogonality and distributedness. On the data tested for this study, context-biasing is shown to reduce catastrophic interference by more than 50% compared to standard backpropagation. In particular, this technique succeeds in reducing catastrophic interference on data where sparse, orthogonal distributions failed to produce any …
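The abstract does not give the context-biasing equations, but the two properties it targets are easy to quantify. The following is a minimal sketch (all function names and thresholds are the editor's assumptions, not the paper's): orthogonality is measured as the mean pairwise cosine overlap between hidden-activation vectors, and distributedness as the fraction of units carrying significant activation per pattern. The toy comparison illustrates the paper's tension: near-local (sparse) codes score well on orthogonality but poorly on distributedness, while random dense codes do the reverse.

```python
import numpy as np

def mean_pairwise_overlap(H):
    """Mean absolute cosine similarity between distinct hidden-activation
    vectors (rows of H). Lower values mean more orthogonal representations."""
    U = H / np.linalg.norm(H, axis=1, keepdims=True)
    G = U @ U.T                              # pairwise cosine similarities
    n = len(H)
    return np.abs(G[~np.eye(n, dtype=bool)]).mean()

def distributedness(H):
    """Fraction of units whose activation exceeds half the row maximum,
    averaged over patterns -- a simple proxy for how distributed the codes are.
    The 0.5 threshold is an arbitrary illustrative choice."""
    return (H > 0.5 * H.max(axis=1, keepdims=True)).mean()

# Toy comparison: near-local sparse codes vs. random dense codes.
rng = np.random.default_rng(0)
sparse = np.eye(8) * 0.9 + 0.05              # orthogonal but not distributed
dense = rng.uniform(0.2, 0.8, size=(8, 8))   # distributed but not orthogonal

for name, H in [("sparse", sparse), ("dense", dense)]:
    print(name, round(mean_pairwise_overlap(H), 3), round(distributedness(H), 3))
```

Context-biasing, as the abstract describes it, would steer hidden representations toward low values on the first measure and high values on the second simultaneously.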
Similar resources
Using Semi-Distributed Representations to Overcome Catastrophic Forgetting in Connectionist Networks
In connectionist networks, newly-learned information destroys previously-learned information unless the network is continually retrained on the old information. This behavior, known as catastrophic forgetting, is unacceptable both for practical purposes and as a model of mind. This paper advances the claim that catastrophic forgetting is a direct consequence of the overlap of the system’s distr...
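The semi-distributed approach referenced above works by "sharpening" hidden-unit activations so that patterns overlap less. A minimal sketch of node sharpening follows; the parameter names `k` and `alpha` and the exact update rule are assumptions for illustration, not taken from the cited paper:

```python
import numpy as np

def sharpen(activations, k=2, alpha=0.5):
    """Push the k most active hidden units toward 1 and the rest toward 0.

    This yields a semi-distributed code: alpha=0 leaves the activations
    unchanged, alpha=1 produces a binary top-k code, and intermediate
    values trade off distributedness against representational overlap.
    """
    a = np.asarray(activations, dtype=float)
    target = np.zeros_like(a)
    target[np.argsort(a)[-k:]] = 1.0         # the k strongest units aim at 1
    return a + alpha * (target - a)          # move each unit toward its target

h = np.array([0.9, 0.6, 0.4, 0.3, 0.1])
print(sharpen(h, k=2, alpha=0.5))            # -> [0.95 0.8  0.2  0.15 0.05]
```

Sharpened codes reduce overlap between stored patterns, which is why they mitigate forgetting; the main abstract's point is that pushing this toward fully sparse codes can overshoot and hurt performance.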
Interactive tandem networks and the sequential learning problem
This paper presents a novel connectionist architecture to handle the "sensitivity-stability" problem and, in particular, an extreme manifestation of the problem, catastrophic interference. This architecture, called an interactive tandem-network (ITN) architecture, consists of two continually interacting networks, one — the LTM network — dynamically storing "prototypes" of the patterns learned, ...
Catastrophic Interference is Eliminated in Pretrained Networks
When modeling strictly sequential experimental memory tasks, such as serial list learning, connectionist networks appear to experience excessive retroactive interference, known as catastrophic interference (McCloskey & Cohen, 1989; Ratcliff, 1990). The main cause of this interference is overlap among representations at the hidden unit layer (French, 1991; Hetherington, 1991; Murre, 1992). This ca...
Catastrophic Interference in Connectionist Networks: Can It Be Predicted, Can It Be Prevented?
Catastrophic forgetting occurs when connectionist networks learn new information, and by so doing, forget all previously learned information. This workshop focused primarily on the causes of catastrophic interference, the techniques that have been developed to reduce it, the effect of these techniques on the networks' ability to generalize, and the degree to which prediction of catastrophic for...
Effect of Sharpening on Hidden-Layer Activation Profiles (Figure 2: activation profiles under various node sharpenings)
Journal:
Volume Issue
Pages -
Published 1994